329 research outputs found

    Evolutionary algorithms for dynamic optimization problems: workshop preface

    Copyright © 2005 AC

    Genetic algorithms with elitism-based immigrants for changing optimization problems

    Copyright © Springer-Verlag Berlin Heidelberg 2007. Addressing dynamic optimization problems has been a challenging task for the genetic algorithm community. Over the years, several approaches have been incorporated into genetic algorithms to enhance their performance in dynamic environments. One major approach is to maintain the diversity of the population, e.g., via random immigrants. This paper proposes an elitism-based immigrants scheme for genetic algorithms in dynamic environments. In the scheme, the elite from the previous generation is used as the base to create immigrants via mutation, which replace the worst individuals in the current population. This way, the introduced immigrants are more adapted to the changing environment. This paper also proposes a hybrid scheme that combines the elitism-based immigrants scheme with the traditional random immigrants scheme to deal with significant changes. The experimental results show that the proposed elitism-based and hybrid immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
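
    A minimal Python sketch of the immigrant step described above, assuming a maximization problem with binary-encoded individuals: the previous generation's elite is mutated to create immigrants that replace the worst individuals. The function names, the 20% immigrant ratio and the bit-flip mutation rate are illustrative assumptions, not the paper's settings.

        import random

        def elitism_based_immigrants_step(population, fitness, mutate,
                                          immigrant_ratio=0.2, p_m=0.01):
            # Sort the population best-first (maximization assumed).
            pop = sorted(population, key=fitness, reverse=True)
            elite = pop[0]
            n_imm = max(1, int(immigrant_ratio * len(pop)))
            # Create immigrants by mutating the elite of the previous generation.
            immigrants = [mutate(elite, p_m) for _ in range(n_imm)]
            # Replace the worst individuals with the elite-derived immigrants.
            return pop[:len(pop) - n_imm] + immigrants

        def bit_flip_mutation(individual, p_m):
            # Flip each bit independently with probability p_m.
            return [1 - g if random.random() < p_m else g for g in individual]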

    Associative memory scheme for genetic algorithms in dynamic environments

    Copyright © Springer-Verlag Berlin Heidelberg 2006. In recent years, dynamic optimization problems have attracted growing interest from the genetic algorithm community, and several approaches have been developed to address these problems, of which the memory scheme is a major one. In this paper, an associative memory scheme is proposed for genetic algorithms to enhance their performance in dynamic environments. In this memory scheme, the environmental information is stored in the memory and associated with the current best individual of the population. When the environment changes, the stored environmental information that is associated with the best re-evaluated memory solution is extracted to create new individuals for the population. Based on a series of systematically constructed dynamic test environments, experiments are carried out to validate the proposed associative memory scheme. The experimental results show the efficiency of the associative memory scheme for genetic algorithms in dynamic environments.
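
    The sketch below illustrates one plausible reading of such an associative memory for binary-encoded problems: the memory stores the population's allele distribution together with its best individual, and on an environment change the distribution associated with the best re-evaluated memory solution is used to sample new individuals. The choice of the allele distribution as the "environmental information" and all names are assumptions for illustration.

        import random

        def allele_distribution(population):
            # Frequency of ones at each bit position in the current population.
            n = len(population)
            return [sum(ind[i] for ind in population) / n
                    for i in range(len(population[0]))]

        def store_in_memory(memory, population, fitness):
            # Store the environmental information paired with the best individual.
            best = max(population, key=fitness)
            memory.append((allele_distribution(population), best))

        def retrieve_on_change(memory, fitness, n_new):
            # Re-evaluate the memory solutions in the new environment and take the
            # distribution associated with the best one to sample new individuals.
            dist, _best = max(memory, key=lambda pair: fitness(pair[1]))
            return [[1 if random.random() < p else 0 for p in dist]
                    for _ in range(n_new)]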

    Triggered memory-based swarm optimization in dynamic environments

    This is a post-print version of this article. Copyright © 2007 Springer-Verlag. In recent years, dynamic optimization problems have attracted increasing attention from the evolutionary computation community, since many real-world optimization problems are time-varying. In this paper, a triggered memory scheme is introduced into particle swarm optimization to deal with dynamic environments. The triggered memory scheme enhances the traditional memory scheme with a triggered memory generator. An experimental study on a benchmark dynamic problem shows that the triggered memory-based particle swarm optimization algorithm has stronger robustness and adaptability than traditional particle swarm optimization algorithms, both with and without the traditional memory scheme, for dynamic optimization problems.
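
    As a hedged sketch of the triggered idea, the snippet below stores the swarm's best into memory only when a trigger condition fires (here, a relative improvement over the last stored solution, assuming positive fitness values to be maximized) and re-seeds the swarm from the re-evaluated memory when a change is detected. The trigger condition, threshold and memory capacity are assumptions; the paper's triggered memory generator may differ.

        def maybe_update_memory(memory, gbest, gbest_fitness, last_stored_fitness,
                                trigger_threshold=0.05, capacity=10):
            # Triggered update: store the swarm's best only when it improves on the
            # last stored solution by more than the threshold (positive fitness assumed).
            if gbest_fitness > last_stored_fitness * (1 + trigger_threshold):
                if len(memory) >= capacity:
                    memory.pop(0)          # drop the oldest entry
                memory.append(list(gbest))
                return gbest_fitness       # new reference fitness for the trigger
            return last_stored_fitness

        def reseed_on_change(swarm, memory, fitness, k=3):
            # On a detected environment change, re-evaluate the memory and inject
            # its k best solutions in place of the swarm's k worst particles.
            recalled = sorted(memory, key=fitness, reverse=True)[:k]
            swarm.sort(key=fitness)        # worst particles first
            swarm[:len(recalled)] = [list(m) for m in recalled]
            return swarm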

    Memory based on abstraction for dynamic fitness functions

    Copyright © Springer-Verlag Berlin Heidelberg 2008. This paper proposes a memory scheme based on abstraction for evolutionary algorithms to address dynamic optimization problems. In this memory scheme, the memory stores good solutions not as themselves but as their abstraction, i.e., their approximate location in the search space. When the environment changes, the stored abstraction information is extracted to generate new individuals for the population. Experiments are carried out to validate the abstraction-based memory scheme. The results show the efficiency of the abstraction-based memory scheme for evolutionary algorithms in dynamic environments. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant No. EP/E060722/1.
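
    A small sketch of the abstraction idea, assuming a box-constrained real-valued search space: a solution is memorized only as the index of the grid cell it falls in, and after a change new individuals are sampled from the stored cells. The uniform grid and the cell resolution are illustrative assumptions, not the paper's exact abstraction.

        import random

        def cell_of(solution, lower, upper, cells_per_dim=10):
            # Abstract a solution to its grid-cell index in each dimension.
            return tuple(min(cells_per_dim - 1,
                             int((x - lo) / (hi - lo) * cells_per_dim))
                         for x, lo, hi in zip(solution, lower, upper))

        def sample_from_cell(cell, lower, upper, cells_per_dim=10):
            # Generate a new individual uniformly at random inside a stored cell.
            new_individual = []
            for c, lo, hi in zip(cell, lower, upper):
                width = (hi - lo) / cells_per_dim
                new_individual.append(lo + c * width + random.random() * width)
            return new_individual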

    Robust optimization over time by learning problem space characteristics

    Robust optimization over time is a new way to tackle dynamic optimization problems in which the goal is to find solutions that remain acceptable over an extended period of time. The state-of-the-art methods in this domain try to identify robust solutions based on their predicted future fitness values. However, predicting future fitness values is difficult and error-prone. In this paper, we propose a new framework based on a multi-population method in which sub-populations are responsible for tracking peaks and for gathering characteristic information about them. When the quality of the current robust solution falls below the acceptance threshold, the algorithm chooses the next robust solution based on the collected information. We propose four different strategies to select the next solution. The experimental results on benchmark problems show that our newly proposed methods perform significantly better than existing algorithms.
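
    The sketch below shows how the decision step of such a framework might look: each sub-population tracks one peak and keeps some characteristic information about it, and when the current robust solution drops below the acceptance threshold, a selection strategy ranks the peaks using that information. The data fields and the example strategy are illustrative, not the four strategies proposed in the paper.

        def choose_next_robust_solution(subpopulations, strategy):
            # Rank the tracked peaks by the collected characteristic information and
            # return the best solution of the highest-ranked peak.
            candidates = []
            for sub in subpopulations:
                info = sub["peak_info"]      # collected characteristics of this peak
                score = strategy(info)       # e.g. average past fitness, stability, ...
                candidates.append((score, sub["best_solution"]))
            return max(candidates, key=lambda pair: pair[0])[1]

        def average_history_strategy(info):
            # One illustrative strategy: prefer peaks that have historically kept a
            # high fitness, i.e. are likely to stay above the acceptance threshold.
            history = info.get("fitness_history", [])
            return sum(history) / len(history) if history else float("-inf")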

    Evolving control rules for a dual-constrained job scheduling scenario

    Dispatching rules are often used for scheduling in semiconductor manufacturing due to the complexity and stochasticity of the problem. In the past, simulation-based Genetic Programming has been shown to be a powerful tool for automating the time-consuming and expensive process of designing such rules. However, the scheduling problems considered were usually constrained only by the capacity of the machines. In this paper, we extend this idea to dual-constrained flow shop scheduling, where machines and the operators for loading and unloading have to be scheduled simultaneously. We show empirically, on a small test problem with parallel workstations, re-entrant flows and dynamic stochastic job arrivals, that the approach is able to generate dispatching rules that perform significantly better than benchmark rules from the literature.
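
    For readers unfamiliar with dispatching rules, the sketch below shows the kind of priority function that Genetic Programming evolves in this setting: a numeric expression over job attributes used to pick the next job from a machine's queue. The attributes and weights are illustrative placeholders, not an evolved rule from the paper.

        def candidate_dispatching_rule(job, now):
            # An example priority expression: favour jobs with little slack and
            # short processing times (weights chosen purely for illustration).
            slack = job["due_date"] - now - job["remaining_work"]
            return -(2.0 * slack + job["processing_time"])

        def dispatch(queue, now, rule=candidate_dispatching_rule):
            # Pick the queued job with the highest priority under the current rule.
            return max(queue, key=lambda job: rule(job, now))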

    Compound particle swarm optimization in dynamic environments

    Copyright © Springer-Verlag Berlin Heidelberg 2008. Adaptation to dynamic optimization problems is currently receiving growing interest as one of the most important applications of evolutionary algorithms. In this paper, a compound particle swarm optimization (CPSO) is proposed as a new variant of particle swarm optimization to enhance its performance in dynamic environments. Within CPSO, compound particles are constructed as a novel type of particle in the search space and their motions are integrated into the swarm. A special reflection scheme is introduced in order to explore the search space more comprehensively. Furthermore, some information preserving and anti-convergence strategies are also developed to improve the performance of CPSO in a new environment. An experimental study shows the efficiency of CPSO in dynamic environments. This work was supported by the Key Program of the National Natural Science Foundation (NNSF) of China under Grant No. 70431003 and Grant No. 70671020, the Science Fund for Creative Research Group of NNSF of China under Grant No. 60521003, the National Science and Technology Support Plan of China under Grant No. 2006BAH02A09, and the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant No. EP/E060722/1.
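
    The snippet below is a rough, hedged sketch of the reflection idea, under the assumption that a compound particle is a small group of member particles whose worst member is reflected through the centroid of the others to probe the opposite region of the search space; the paper's exact construction and reflection rule may differ.

        def reflect_worst(compound, fitness):
            # Sort the members of one compound particle best-first (maximization).
            members = sorted(compound, key=fitness, reverse=True)
            worst, rest = members[-1], members[:-1]
            dim = len(worst)
            # Centroid of the remaining members.
            centroid = [sum(p[d] for p in rest) / len(rest) for d in range(dim)]
            # Reflect the worst member through the centroid.
            reflected = [2 * centroid[d] - worst[d] for d in range(dim)]
            # Keep the reflected point only if it improves on the original member.
            if fitness(reflected) > fitness(worst):
                members[-1] = reflected
            return members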

    Parallelizing multi-objective evolutionary algorithms: cone separation

    Evolutionary multi-objective optimization (EMO) may be computationally quite demanding, because instead of searching for a single optimum, one generally wishes to find the whole front of Pareto-optimal solutions. For that reason, parallelizing EMO is an important issue. Since we are looking for a number of Pareto-optimal solutions with different trade-offs between the objectives, it seems natural to assign different parts of the search space to different processors. We propose the idea of cone separation, which is used to divide up the search space by adding explicit constraints for each process. We show that the approach is more efficient than simple parallelization schemes, and that it also works on problems with a non-convex Pareto-optimal front.
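
    As an illustration of cone separation for the bi-objective case, the sketch below assigns a solution to a cone by the angle of its normalized objective vector and treats membership in another process's cone as a constraint violation. The uniform angular split and the normalization by assumed ideal and nadir points are simplifications for illustration.

        import math

        def cone_index(objectives, ideal, nadir, n_processes):
            # Normalize the two (minimization) objectives and compute the angle of
            # the objective vector, which lies in [0, pi/2] after normalization.
            f1 = (objectives[0] - ideal[0]) / (nadir[0] - ideal[0])
            f2 = (objectives[1] - ideal[1]) / (nadir[1] - ideal[1])
            angle = math.atan2(f2, f1)
            width = (math.pi / 2) / n_processes
            return min(n_processes - 1, int(angle / width))

        def cone_violation(objectives, ideal, nadir, my_cone, n_processes):
            # Explicit constraint for one process: zero if the solution lies in its
            # own cone, positive (penalized) otherwise.
            in_cone = cone_index(objectives, ideal, nadir, n_processes) == my_cone
            return 0.0 if in_cone else 1.0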

    An analysis of the XOR dynamic problem generator based on the dynamical system

    This is the post-print version of the article. Copyright © 2010 Springer-Verlag. In this paper, we use the exact model (or dynamical system approach) to describe the standard evolutionary algorithm (EA) as a discrete dynamical system for dynamic optimization problems (DOPs). Based on this dynamical system model, we analyse the properties of the XOR DOP Generator, which has been widely used by researchers to create DOPs from any binary-encoded problem. DOPs generated by this generator are described as DOPs with permutation, where the fitness vector is changed according to a permutation matrix. Some properties of DOPs with permutation are analysed, which helps to explain some behaviours observed in experimental results. The analysis of the properties of problems created by the XOR DOP Generator is important for understanding the results obtained in experiments with this generator and for analysing the similarity of such problems to real-world DOPs. This work was supported by FAPESP (Brazil) under Grant 04/04289-6 and by the UK EPSRC under Grant EP/E060722/2.
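
    Since the XOR DOP Generator itself is compact, a sketch of its usual form may help: each change XORs the current mask with a random template whose number of ones is controlled by the change severity rho, and individuals are evaluated as f(x XOR mask), which corresponds to permuting the fitness vector of the base problem. The function names are illustrative.

        import random

        def xor_dop_generator(base_fitness, n_bits, rho):
            # Current XOR mask applied to every individual before evaluation.
            mask = [0] * n_bits

            def change():
                # Flip rho * n_bits randomly chosen mask bits at each change.
                for i in random.sample(range(n_bits), int(rho * n_bits)):
                    mask[i] ^= 1

            def fitness(x):
                # Evaluate the individual in the shifted environment: f(x XOR mask).
                return base_fitness([xi ^ mi for xi, mi in zip(x, mask)])

            return change, fitness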